Resources

People often ask me "How did you learn how to hack?" The answer: by reading. This page is a collection of the blog posts and other articles that I have accumulated over the years of my journey. Enjoy!

How We Broke Exchanges: A Deep Dive Into Authentication And Client-Side Bugs- 1936

OtterSec    Reference → Posted 7 Hours Ago
  • A common OAuth misconfiguration is allowlisting localhost for development purposes. When enabled in production, this can lead to an application running on the device to steal OAuth codes via redirects to itself. The same issue could appear with CORS.
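This localhost-allowlisting pattern can be sketched in a few lines of Python. Everything here is hypothetical (the validator name, the allowlist contents); it only illustrates how a development convenience becomes an exploitable trust decision in production.

```python
from urllib.parse import urlparse

# Hypothetical redirect_uri validator. The localhost branch is the dev
# convenience that becomes a vulnerability in production: any program on
# the victim's device can bind a loopback port and receive the OAuth code.
ALLOWED_REDIRECTS = {"https://app.example-exchange.com/callback"}

def redirect_allowed(redirect_uri: str) -> bool:
    host = urlparse(redirect_uri).hostname or ""
    if host in ("localhost", "127.0.0.1"):  # dev shortcut left enabled in prod
        return True
    return redirect_uri in ALLOWED_REDIRECTS

# An attacker-controlled local listener passes validation:
assert redirect_allowed("http://127.0.0.1:4444/steal")
assert not redirect_allowed("https://evil.example.com/callback")
```

The same wildcard-style trust in loopback origins shows up when CORS policies reflect localhost origins into Access-Control-Allow-Origin.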

SharePoint ToolShell – One Request PreAuth RCE Chain- 1935

viettel    Reference → Posted 8 Hours Ago
  • The first vulnerability that the author found was a deserialization vulnerability. In SharePoint, there is arbitrary deserialization of DataSet and DataTable in some functions. Because DataSet is a well-known gadget in ysoserial, Microsoft has a filtering mechanism. It will strip out all other serialization information except for XmlSchema and XmlDiffGram.
  • The type validation doesn't allow for anything besides a simple type allowlist. However, this validation doesn't work on nested types, such as a type within an array. This allows for bypassing the allowlist check and getting RCE via known deserialization bugs. This attack requires authentication, so the author started looking for ways to trigger this functionality without auth. SharePoint has both generalized auth and page-level auth to circumvent.
  • It's possible to trigger simple ToolPane functionality to reach this. First, if the Referer header is set to a specific value, it bypasses authentication. Next, they need to trigger the vulnerability before page verification occurs in the Load() event. By combining ToolPane and SPWebPartManager, an attacker can force SharePoint to trigger the vulnerable code before the full ASP.NET lifecycle takes place. All of this was just reverse-engineering the application and seeing which paths could be hit.
  • The rest of the blog post is slightly hard to follow. Regardless, it's an interesting look into the ASP.NET and SharePoint security world. The bug is super impactful and a cool Pwn2Own entry.
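The nested-type blind spot can be illustrated with a toy Python validator. This is not SharePoint's actual filtering code (which is far more involved); it only shows the pattern of a flat allowlist that never recurses into element types.

```python
# Toy model of a flat type allowlist: only the outermost type name is
# compared, and array types are accepted wholesale without inspecting
# their element type.
ALLOWED_TYPES = {"System.String", "System.Int32"}

def type_allowed(type_name: str) -> bool:
    if type_name.endswith("[]"):
        return True  # BUG: arrays accepted, element type never checked
    return type_name in ALLOWED_TYPES

assert not type_allowed("Dangerous.Gadget")  # flat check blocks the gadget
assert type_allowed("Dangerous.Gadget[]")    # nesting it in an array bypasses
```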

One Missing Check, $500M at Risk: MsgBatchUpdateOrders Let Anyone Drain Any Account on Injective- 1934

al-f4lc0n    Reference → Posted 11 Hours Ago
  • Injective is a Cosmos-based blockchain that includes an EVM runtime, in addition to the regular Cosmos features. It contains a subaccount module in which the account must be owned by the transaction signer.
  • The sub-account check actually ensures that the signer owns the specified sub-account. However, in the batching code within MsgBatchUpdateOrders, this check is not performed on three order types. This allows for complete circumvention of the security protection and gives attackers the ability to impersonate users on their operations.
  • To exploit this, an attacker would do the following:
    1. Create a worthless token.
    2. Create a spot market with FAKE/USDT.
    3. Place a sell order for FAKE/USDT. This will sell their worthless token for a valuable token.
    4. Use the vulnerability to force the victim to market buy the fake token. The attacker ends up with the valuable token.
    5. Bridge out of Injective to Ethereum with the USDT.
  • The vulnerability appears straightforward, but the aftermath wasn't. The vulnerability was submitted on November 30th, 2025. On December 1st, they fixed the issue. After a while, the white hat asked for a follow-up but got nothing until February 11th, when they confirmed its validity. On March 5th, the bug bounty program offered a $50K bounty instead of the whitehat's expected maximum payout of $500K.
  • The impact of $500M seems off to me. At the time of writing, Injective's TVL is about $12M, so I don't know where the $500M comes from. Beyond this, the statements in a now-deleted tweet from Injective seem pretty off; the whitehat responded in a tweet as well. From changing conditions later to unresponsiveness, this seemed pretty bad. Immunefi paused Injective's bug bounty program for the time being.
  • Overall, a pretty simple vulnerability that had a tremendous impact. In a bear market, it's hard to get paid for your bugs, though. I feel for the whitehat, if all of the claims are accurate.
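The bug pattern behind those steps can be sketched in Python. All names here are hypothetical (this is not Injective's Go code); the point is that the single-order path verifies sub-account ownership while the batch path forgets the check for some order types.

```python
class AuthError(Exception):
    pass

def owns(signer: str, subaccount: str) -> bool:
    # Illustrative: a sub-account ID derived from the owner's address.
    return subaccount.startswith(signer)

def place_spot_order(signer, subaccount, order):
    if not owns(signer, subaccount):
        raise AuthError("signer does not own sub-account")  # enforced here
    return ("placed", subaccount, order)

def batch_update_orders(signer, spot_orders):
    placed = []
    for subaccount, order in spot_orders:
        # BUG: no owns(signer, subaccount) check on this path
        placed.append(("placed", subaccount, order))
    return placed

# The attacker forces a victim sub-account to market-buy their fake token:
result = batch_update_orders("attacker", [("victim-sub0", "buy FAKE/USDT")])
assert result[0][1] == "victim-sub0"
```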

Exploiting aToken liquidity addition in stableswap - post mortem- 1933

Jakub Panik    Reference → Posted 11 Hours Ago
  • Hydration is a money market on Polkadot. It uses the EVM-based AAVE money market, which is directly integrated into its Substrate chain. This allows users to lend and borrow tokens across both ERC-20 and Substrate-native assets.
  • AAVE introduced the concept of aTokens, yield-bearing assets that are rebasing. Over time, this led to calculation errors in some of the Hydration code when converting between tokens and aTokens. Since ERC-20 balances aren't subject to Substrate's existential deposit (ED) cleanup, dust is never cleaned up. Over time, this led to higher gas usage, performance degradation, and confusing balance displays.
  • To fix this issue, a transfer() function was added to the built-in Currencies library. This provided a generic solution for all aToken transfers. If the remaining balance after the transfer is less than the ED, the runtime would perform an AAVE withdraw-all for the recipient. This would ensure that no dust remained in the origin account. All seems good in the world!
  • This change has a reasonable implementation with atoken_balance.saturating_sub(amount);. This uses saturating math to guard against underflow. In the context of this one function, it makes perfect sense. However, the change was made at a much more general level, causing unintended side effects on the rest of the system.
  • In particular, Stableswap::add_liquidity_share() mints liquidity shares for the user in exchange for a user-provided asset. A user could call this function with an aToken amount greater than their actual account balance. Because the transfer logic no longer fails when the user has insufficient funds, this succeeds. This allowed for an infinite mint of shares on the protocol, effectively a game-over bug.
  • Once the bug was reported to the project, they paused the affected pools immediately (within 2 hours). After pausing, an emergency "stealth" upgrade was performed, which landed on mainnet 7 hours after the initial bug report. They have some interesting takeaways... first, default to checked_* functions in Rust instead of saturating_* ones. Second, improve testing across the board to find more of these edge cases.
  • The change was very small but had wide-ranging consequences. So, in the future, they will tag PRs with their potential scope and subsystem integration requirements. It's interesting that such a small change in a function used by other sections of code was so catastrophic.
  • In the end, they paid the reporter the maximum payout on their program ($500K) for a relatively simple bug report that could have caused a $22M loss. This was a 50% split between stablecoins and $250K of their token, vested over 20 months. The origins of the vulnerability were super interesting to me; I found it particularly interesting that changing a small, shared function can have such huge consequences. Good writeup!
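The arithmetic at the heart of the bug can be reproduced in a few lines of Python, modelling Rust's saturating_sub and checked_sub semantics:

```python
# Rust's saturating_sub clamps at zero instead of signalling underflow;
# checked_sub returns an Option (modelled as None here) that the caller
# must explicitly handle.
def saturating_sub(a: int, b: int) -> int:
    return max(a - b, 0)

def checked_sub(a: int, b: int):
    return a - b if a >= b else None

balance, requested = 100, 1_000_000

# Saturating math: the "debit" silently succeeds with a zeroed balance,
# so downstream code mints shares for the full requested amount.
assert saturating_sub(balance, requested) == 0

# Checked math: the insufficient-funds case is explicit and can abort.
assert checked_sub(balance, requested) is None
assert checked_sub(requested, balance) == 999_900
```

Saturation is a fine local choice inside the dust-cleanup helper, but because it sits on the generic transfer path, the silent clamp leaked into every caller, including the share-minting code.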

Out-of-Cancel: A Vulnerability Class Rooted in Workqueue Cancellation APIs- 1932

v4bel    Reference → Posted 11 Hours Ago
  • The function cancel_work_sync() can be used to stop currently running tasks in the Linux kernel, but the work can be rescheduled through a separate path. Unlike tasklets, workqueue-based execution doesn't provide a reliable way to control an object's lifetime using cancellation alone. So, disable_work_sync was added to address this. However, none of this sat well with the author of this post. This subtle design led to multiple race condition vulnerabilities in the synchronous worker cancellation process.
  • The author notes that this isn't a missing lock or a forgotten condition: it's a fundamental design issue. The _cancel APIs are treated as a synchronization barrier for the object's lifetime. While a cancel can stop/clean up what is running right now, it does not guarantee that the work will never run again. So, they named this bug class Out-of-Cancel issues and seem to expect to find more of these in the Linux kernel in the future.
  • ULP (Upper Layer Protocol) is a mechanism to hook TCP for special code before or after the TCP code. This gives it a lot of flexibility but also blurs the lines of object ownership, lifetime, and execution context. Given the complexity of this and the TCP state machine, small implementation mistakes can cause other parts of TCP to behave in weird ways. One of these ULPs is ESP transport based on RFC 8229. Although they found bugs in several locations, the main focus is a CVE within espintcp.
  • Within espintcp_close, the code calls cancel_work_sync() when it should call disable_work_sync(). This leaves the work schedulable again, even though the function contains cleanup code. This leads to a classic use-after-free scenario. The rest of the post is all about hitting the race condition reliably and requires a deep understanding of the Linux kernel to grasp.
  • At the end of the article, they show the patches for this bug and three others that they found. In all cases, the patch looks the same: use disable_delayed_work_sync instead of cancel_delayed_work_sync. The article is interesting even if you skip the binary-exploitation details. They identified a bad design pattern and found multiple abuses of it. That's great research!
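The lifetime guarantee each API gives can be captured in a toy Python model (this is not kernel code, just the shape of the contract):

```python
# cancel-style APIs stop what is pending/running right now; nothing stops
# another path from re-arming the work afterward. disable-style APIs make
# the work object permanently unschedulable, which is what teardown needs.
class Work:
    def __init__(self):
        self.pending = False
        self.disabled = False

    def schedule(self) -> bool:
        if self.disabled:
            return False
        self.pending = True
        return True

    def cancel_sync(self):       # models cancel_work_sync()
        self.pending = False

    def disable_sync(self):      # models disable_work_sync()
        self.pending = False
        self.disabled = True

w = Work()
w.cancel_sync()
assert w.schedule()       # re-armed after "cancel": work may outlive its object
w.disable_sync()
assert not w.schedule()   # disabled: now safe to free the object
```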

How to Harden GitHub Actions: The Unofficial Guide- 1931

Wiz    Reference → Posted 12 Hours Ago
  • GitHub Actions security has been at the forefront for the last few months. A vulnerability in a GitHub Action allowed the compromise of Spotbugs' GitHub PAT. This led to write access to Spotbugs and leaking further secrets from Spotbugs and Reviewdog. From Reviewdog, they compromised tj-actions to target real users. In this post, they dive deeper into GitHub Actions best practices.
  • Point 1 is to set the default workflow token permission to read-only. Prior to February 2023, the default was read-write. Second is using verified actions. These are GitHub Actions from trusted sources, such as GitHub itself or Marketplace-verified creators. If you're running self-hosted runners, you can also restrict the repositories that they run in. There's also a setting that should NEVER be enabled: "Allowing GitHub Actions to Create and Approve Pull Requests."
  • Branch protections can also do a lot. Requiring specific rules, such as allowing only trusted code into the main/release branches, is good. Require re-review upon any changes, whether from you or others.
  • Secrets have three different types: repo, organization, and environment. Repo-level secrets are only available to the repo, but they are readable by all users with write access. Environment-level secrets offer more granular control; they can be made available only to jobs that reference the environment and can require approval before execution.
  • Overall, a good article on hardening GitHub Actions from a security engineer's perspective.
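As a quick self-audit for the first point, a naive Python check can flag workflows that never pin the token to read-only. This is crude string matching, not a real YAML parser; treat it as a sketch of the idea rather than a production linter.

```python
# Heuristic: a workflow is considered hardened only if it declares a
# top-level permissions block before its jobs and grants no write scopes.
def has_readonly_permissions(workflow_text: str) -> bool:
    text = workflow_text.lower()
    if "permissions:" not in text:
        return False  # inherits the (possibly read-write) org/repo default
    block = text.split("permissions:", 1)[1].split("jobs:", 1)[0]
    return "write" not in block

good = "permissions:\n  contents: read\njobs:\n  build: {}"
bad = "jobs:\n  build: {}"
loose = "permissions:\n  contents: write\njobs:\n  build: {}"

assert has_readonly_permissions(good)
assert not has_readonly_permissions(bad)    # no explicit block at all
assert not has_readonly_permissions(loose)  # write scope granted
```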

Remote Command Execution in Google Cloud with Single Directory Deletion- 1930

ryotak    Reference → Posted 13 Hours Ago
  • Looker is a business intelligence (BI) and data analytics platform within Google Cloud. It allows connecting data sources, creating data models, and performing analytics. This product has both cloud-hosted and self-hosted versions.
  • Looker provides a feature for managing model files in Git repos, allowing users to pull or push changes to the repo. To integrate with Git, it will use JGit. When interacting with a git repo configured over SSH, Looker uses the native Git command-line tool instead. To do this, it will check out the repo and execute git commands against the directory.
  • When the .git directory is deleted, it's possible for the main part of the repo to be treated as the git directory for git commands. Using this, it's possible to execute arbitrary bash commands via files like config. Because of this, when a user deletes a directory, Looker validates the deletion request. In particular, it checks if the directory is .git.
  • The deletion command does NOT check whether the name is /! So, it's possible to delete the entire directory, including the git repo. At first glance, this means the attack above wouldn't work; there's no content in the directory at all. This is where the internals of git and the Ruby file manager come into play...
  • The process of file deletion can take a long time. Since there is concurrent access to this file system, the idea is to trigger the folder deletion to remove the git repo, then make another request that triggers the RCE while the deletion is still in progress. To hit this timing window, a lot of extra files need to be created, and the window is tight. It does seem like the author was able to hit this fairly consistently, though.
  • Once on the box with RCE, they decided to review the container for other issues. Upon doing this, they noticed that the Kubernetes service account credentials were mounted onto the box. This allowed updating the namespace secrets, affecting other Looker instances. Because this was production and modifying these secrets would have broken something, they stopped there. Google confirmed it was possible to escalate privileges to access other instances in the same Kubernetes cluster though.
  • This report is pretty awesome for two reasons. First, the vulnerability they found required A) understanding why the protection was there and B) bypassing it. These second-order-type issues in complex systems are always a great place to look. Second, they really did right by Google. They asked for permissions at the proper times and didn't destroy anything, while finding two impactful vulnerabilities. Great find by this author!
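The validation gap boils down to a few lines. Looker itself isn't written in Python and the function name here is hypothetical; this only shows the shape of the check that blocks ".git" by name while missing that deleting the root removes .git along with everything else.

```python
import posixpath

# The handler blocks deleting ".git" directly, but a request to delete the
# repository root passes the check and takes .git down with it.
def delete_allowed(path: str) -> bool:
    return posixpath.basename(path.rstrip("/")) != ".git"

assert not delete_allowed("/repo/.git")  # direct deletion: blocked
assert delete_allowed("/")               # root deletion: allowed
```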

The DMARC OR Trap: How Attackers Bypass DKIM Without Breaking a Key- 1929

Daniel Streefkerk    Reference → Posted 14 Hours Ago
  • DomainKeys Identified Mail (DKIM) is an email security standard that adds a digital signature to outgoing emails. The idea is to prove that the sender owns the domain and that the message wasn't modified in transit. In a recent Twitter thread, the author decided to touch on how attackers actually bypass DKIM without breaking keys. The tl;dr: you simply make the server stop asking for it.
  • DMARC is a standard that leverages both DKIM and SPF for security. DMARC authentication requires an OR instead of an AND: if either SPF or DKIM succeeds, then DMARC succeeds.
  • There are two common misconfigurations of SPF that are easy to abuse. Under relaxed alignment, any subdomain of a domain is also valid. For instance, anything.company.com will pass under the domain company.com. So, all it takes is a single subdomain compromise.
  • The second abuse is overly broad SPF record usage. For instance, using include:salesforce.com is bad. An attacker could simply use their own Salesforce account, and it would now be valid.
  • In response to these issues, they have a few tips for making things safer. First, don't include third-party senders in your SPF. This has too high a risk of spoofing. Second, use strict alignment on the DMARC record where possible. A great post on the realities of email security!
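The OR semantics and the two alignment modes map directly to a few lines of Python. This is a sketch of the logic as described above, not a full DMARC (RFC 7489) implementation:

```python
# Relaxed alignment accepts any subdomain of the From: domain; strict
# alignment requires an exact match.
def aligned(auth_domain: str, from_domain: str, strict: bool) -> bool:
    if strict:
        return auth_domain == from_domain
    return auth_domain == from_domain or auth_domain.endswith("." + from_domain)

def dmarc_pass(spf_domain, spf_ok, dkim_domain, dkim_ok,
               from_domain, strict=False):
    spf_aligned = spf_ok and aligned(spf_domain, from_domain, strict)
    dkim_aligned = dkim_ok and aligned(dkim_domain, from_domain, strict)
    return spf_aligned or dkim_aligned   # OR, not AND

# A compromised subdomain passes relaxed SPF alignment, so DMARC passes
# even though DKIM fails outright:
assert dmarc_pass("anything.company.com", True, "company.com", False,
                  "company.com")
assert not dmarc_pass("anything.company.com", True, "company.com", False,
                      "company.com", strict=True)
```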

The Real Cost of an Onchain Hack: 2024-2025 Update - 1928

Immunefi    Reference → Posted 14 Hours Ago
  • What happens once a protocol has been hacked? What is the long-term financial impact of it? This article discusses all of this in their second rendition of the report.
  • The frequency of hacks has plateaued: 94 in 2024 and 97 in 2025. The number of exploits has settled into a steady rate. The median amount for a hack has decreased while the average has increased, meaning the typical hack is smaller but the upper end is much larger. The top five largest hacks account for 62% of funds stolen.
  • The shock to the protocol's token is immense. Within two days, the token drops by 10%. Over six months, it drops by 53%-61%. After six months, the curve steepens. The market has a lasting penalty for security issues.
  • The impact extends beyond a single project's finances because many of these projects are interconnected. They include the example of Elixir's deUSD stablecoin. It was hacked for $93M to begin with. Because so much collateral was parked with Stream, their own stablecoin dropped in value by 77%. Stream froze withdrawals, and panic ensued, leading to a $30M dump on chain. All in all, deUSD lost more than 97% of its value, leading to Elixir being sunset.
  • Organizationally, things change as well. Security leadership leaves after a hack. The recovery period takes most of the mindshare instead of forward development. This results in three months of effort lost. Overall, a good post on the impact of a hack.

Moonwell Governance Attack- 1927

tomkysar    Reference → Posted 14 Hours Ago
  • Governance in blockchain-land works well until its economics no longer do. An attacker bought $1.8K worth of governance tokens and used them to create a fraudulent governance proposal. With the tokens they bought, they were able to pass quorum.
  • The proposal would transfer admin control of 7 lending markets, oracles, and many other things. Once admin, the attacker could withdraw all of the funds. There is currently $1.08M in funds at risk. Luckily, there is a Break Glass Guardian to neutralize the attack, but relying on it to stop the attack is less than ideal.
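The asymmetry that makes this attack rational is worth spelling out with the figures above:

```python
# Figures from the post: $1.8K of governance tokens bought vs. $1.08M of
# funds the proposal would put at risk.
attack_cost = 1_800
funds_at_risk = 1_080_000

leverage = funds_at_risk / attack_cost
assert leverage == 600.0  # each dollar spent threatens $600 of user funds
```

Whenever quorum can be bought for orders of magnitude less than the treasury it controls, governance is only as safe as its guardian mechanisms.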